The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practices and the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
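The patch-based training strategy reported by most respondents can be illustrated with a minimal sketch. The function below is a hypothetical, dependency-free illustration (not code from any surveyed solution): it splits a 2D image, represented as a list of rows, into fixed-size patches with a given stride, which is the core preprocessing step before training a model on patches instead of whole images.

```python
def extract_patches(image, patch, stride):
    """Split a 2D image (list of rows) into patch x patch tiles.

    Overlap is controlled by `stride`: stride < patch yields
    overlapping patches, stride == patch yields a disjoint grid.
    """
    h, w = len(image), len(image[0])
    patches = []
    for top in range(0, h - patch + 1, stride):
        for left in range(0, w - patch + 1, stride):
            patches.append([row[left:left + patch]
                            for row in image[top:top + patch]])
    return patches

# Toy 4x4 "image" with pixel values 0..15.
image = [[r * 4 + c for c in range(4)] for r in range(4)]
patches = extract_patches(image, patch=2, stride=2)  # disjoint 2x2 grid
```

With `patch=2, stride=2` on a 4x4 input this yields four non-overlapping 2x2 patches; a real pipeline would additionally reassemble per-patch predictions into a full-size output.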
Black-box machine learning models are criticized for their lack of interpretability, even though they often achieve good predictive accuracy. Knowledge distillation (KD) is an emerging tool that can explain a black-box model by distilling its knowledge into a transparent model. With well-known advantages in interpretation, decision trees are competitive candidates for the transparent model. However, theoretical and empirical understanding of the decision trees produced by the KD process is limited. In this paper, we name such trees distilled decision trees (DDTs) and lay the theoretical foundations for tree-structure stability, which determines the validity of a DDT's interpretation. We prove that, under mild assumptions, the structure of a DDT can achieve stability (convergence). Meanwhile, we develop algorithms for inducing stable DDTs, propose parallel strategies to improve the computational efficiency of the algorithms, and introduce a marginal principal component analysis method to overcome the curse of dimensionality in sampling. Simulated and real-data studies justify our theoretical results, validate the efficacy of the algorithms, and demonstrate that DDTs can strike a good balance between a model's predictive accuracy and interpretability.
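The basic distillation loop behind a DDT can be sketched in a few lines. This is a deliberately tiny stand-in, not the paper's algorithm: the "black box" is a threshold function, and the "distilled tree" is a depth-1 stump fitted to the teacher's predicted labels rather than to ground truth.

```python
def black_box(x):
    # Stand-in teacher: any opaque model's predict function would go here.
    return 1 if x > 0.6 else 0

def distill_stump(xs, teacher):
    """Fit a one-split 'distilled decision tree' to the teacher's labels.

    Returns (threshold, fidelity), where fidelity is the fraction of
    inputs on which the stump agrees with the teacher.
    """
    labels = [teacher(x) for x in xs]
    best = None
    for t in sorted(xs):  # candidate split points
        pred = [1 if x > t else 0 for x in xs]
        acc = sum(p == y for p, y in zip(pred, labels)) / len(xs)
        if best is None or acc > best[1]:
            best = (t, acc)
    return best

xs = [i / 10 for i in range(11)]  # queries to the teacher on a 1-D grid
threshold, fidelity = distill_stump(xs, black_box)
```

The paper's stability question is, in these terms, whether `threshold` converges as the sample of teacher queries grows; here the stump recovers the teacher's split exactly.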
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities remain poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, mathematics, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
PennyLane is a Python 3 software framework for differentiable programming of quantum computers. The library provides a unified architecture for near-term quantum computing devices, supporting both qubit and continuous-variable paradigms. PennyLane's core feature is the ability to compute gradients of variational quantum circuits in a way that is compatible with classical techniques such as backpropagation. PennyLane thus extends the automatic differentiation algorithms common in optimization and machine learning to include quantum and hybrid computations. A plugin system makes the framework compatible with any gate-based quantum simulator or hardware. We provide plugins for hardware providers including the Xanadu Cloud, Amazon Braket, and IBM Quantum, allowing PennyLane optimizations to be run on publicly accessible quantum devices. On the classical front, PennyLane interfaces with accelerated machine learning libraries such as TensorFlow, PyTorch, JAX, and Autograd. PennyLane can be used for the optimization of variational quantum eigensolvers, quantum approximate optimization, quantum machine learning models, and many other applications.
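One standard way to obtain hardware-compatible gradients of a variational circuit is the parameter-shift rule, which evaluates the circuit at two shifted parameter values instead of differentiating through it. The sketch below uses a toy analytic "circuit" (the expectation value cos(theta) of RY(theta)|0> measured in Z) in place of a real device, so it runs without any quantum framework installed; it is an illustration of the technique, not PennyLane's actual implementation.

```python
import math

def circuit(theta):
    # Toy expectation value <Z> after RY(theta) on |0>: equals cos(theta).
    return math.cos(theta)

def parameter_shift_grad(f, theta, shift=math.pi / 2):
    """Parameter-shift rule: the gradient from two circuit evaluations.

    For gates generated by Pauli operators this is exact, not a
    finite-difference approximation.
    """
    return (f(theta + shift) - f(theta - shift)) / 2

theta = 0.3
grad = parameter_shift_grad(circuit, theta)  # analytic gradient is -sin(theta)
```

Because each gradient entry needs only circuit evaluations at shifted parameters, the same recipe works on real quantum hardware where backpropagation through the device is impossible.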
Electronic Health Records (EHRs) hold detailed longitudinal information about each patient's health status and general clinical history, a large portion of which is stored within unstructured text. Temporal modelling of this medical history, which considers the sequence of events, can be used to forecast and simulate future events, estimate risk, suggest alternative diagnoses, or forecast complications. While most prediction approaches use mainly structured data or a subset of single-domain forecasts and outcomes, we processed the entire free-text portion of EHRs for longitudinal modelling. We present Foresight, a novel GPT3-based pipeline that uses NER+L tools (i.e. MedCAT) to convert document text into structured, coded concepts, and then provides probabilistic forecasts for future medical events such as disorders, medications, symptoms and interventions. Since large portions of EHR data are in text form, such an approach benefits from a granular and detailed view of a patient while introducing only modest additional noise. On tests in two large UK hospitals (King's College Hospital, and South London and Maudsley) and the US MIMIC-III dataset, precision@10 values of 0.80, 0.81 and 0.91 were achieved for forecasting the next biomedical concept. Foresight was also validated on 34 synthetic patient timelines by 5 clinicians and achieved a relevancy of 97% for the top forecasted candidate disorder. Foresight can be easily trained and deployed locally, as it requires only free-text data (as a minimum). As a generative model, it can simulate follow-on disorders, medications and interventions for as many steps as required. Foresight is a general-purpose model for biomedical concept modelling that can be used for real-world risk estimation, virtual trials and clinical research to study the progression of diseases, simulate interventions and counterfactuals, and for educational purposes.
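The task Foresight addresses, forecasting the next coded concept in a patient timeline, can be made concrete with a trivially small stand-in model. The bigram counter below is a hypothetical toy (Foresight itself is GPT-based); the concept names in the example timelines are invented for illustration.

```python
from collections import Counter, defaultdict

def train_bigram(timelines):
    """Count concept -> next-concept transitions across patient timelines."""
    counts = defaultdict(Counter)
    for timeline in timelines:
        for cur, nxt in zip(timeline, timeline[1:]):
            counts[cur][nxt] += 1
    return counts

def forecast(counts, concept, k=3):
    """Top-k most likely next concepts following `concept`."""
    return [c for c, _ in counts[concept].most_common(k)]

# Invented coded-concept timelines standing in for NER+L output.
timelines = [
    ["hypertension", "statin", "chest_pain", "mi"],
    ["hypertension", "statin", "mi"],
    ["diabetes", "metformin"],
]
counts = train_bigram(timelines)
```

Metrics such as precision@10 then ask whether the concept that actually occurred next appears among the model's top-10 forecasts.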
We study the use of model-based reinforcement learning methods, in particular world models, for continual reinforcement learning. In continual reinforcement learning, an agent is required to solve one task and then another sequentially while retaining performance and preventing forgetting on past tasks. World models offer a task-agnostic solution: they do not require knowledge of task changes. World models are a straightforward baseline for continual reinforcement learning for three main reasons. Firstly, forgetting in the world model is prevented by persisting existing experience replay buffers across tasks, so that experience from previous tasks is replayed when learning the world model. Secondly, they are sample-efficient. Thirdly, they offer a task-agnostic exploration strategy through the uncertainty in the trajectories generated by the world model. We show that world models are a simple and effective continual reinforcement learning baseline. We study their effectiveness on the Minigrid and Minihack continual reinforcement learning benchmarks and show that they outperform state-of-the-art task-agnostic continual reinforcement learning methods.
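The first ingredient above, a replay buffer that is persisted across task boundaries so old experience keeps being replayed into the world model, is simple to sketch. This is an illustrative minimal buffer, not the paper's implementation; the `(task_id, step)` tuples stand in for real `(obs, action, reward, ...)` transitions.

```python
import random

class PersistentReplayBuffer:
    """A replay buffer kept across tasks, so world-model training
    continues to sample experience from earlier tasks."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.storage = []

    def add(self, transition):
        if len(self.storage) >= self.capacity:
            self.storage.pop(0)  # drop oldest; reservoir sampling is an alternative
        self.storage.append(transition)

    def sample(self, batch_size):
        return random.sample(self.storage, min(batch_size, len(self.storage)))

buffer = PersistentReplayBuffer(capacity=1000)
for task_id in range(3):            # tasks arrive sequentially
    for step in range(10):          # NOT reset between tasks
        buffer.add((task_id, step))
batch = buffer.sample(8)            # mixes experience from all tasks seen so far
```

The key design choice is simply never resetting `storage` when the task changes; every world-model update therefore sees a mixture of old and new tasks, which counteracts forgetting.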
Cultural regions represent a useful concept that cross-fertilizes research across the social sciences. Understanding how humans organize and relate their ideas and behaviors within a society helps to explain their actions and attitudes towards different issues. However, the selection of common traits that shape a cultural region is arbitrary. What is needed is a method that can exploit the massive amounts of data available online, especially through social media, to identify cultural regions without ad-hoc assumptions, biases, or prejudices. In this work, we take a crucial step in this direction by introducing a method to infer cultural regions based on the automatic analysis of large datasets of micro-blog posts. Our approach is based on the principle that cultural affiliation can be inferred from the topics that people discuss among themselves. Specifically, we measure regional variations in the written discourse generated on American social media. From the frequency distributions of content words in geotagged tweets, we find regional hotspots of word usage, and from there we derive the principal components of regional variation. Through hierarchical clustering of the data in this lower-dimensional space, our method yields clear cultural regions and the topics of discussion that define them. We obtain a manifest North-South separation, shaped significantly by African American culture, together with further contiguous (East-West) and non-contiguous (urban-rural) divisions, which together provide a comprehensive picture of today's cultural regions in the United States.
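The pipeline's first two steps, building per-region word-frequency vectors and then merging the nearest regions as agglomerative clustering would, can be sketched without any ML library. All region names, tweets, and vocabulary below are invented toy data; the real study uses geotagged tweets and a PCA-reduced space.

```python
from collections import Counter
import math

def region_vector(tweets, vocab):
    """Normalized content-word frequency vector for one region's tweets."""
    counts = Counter(w for t in tweets for w in t.split())
    total = sum(counts[w] for w in vocab) or 1
    return [counts[w] / total for w in vocab]

def distance(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

vocab = ["yall", "hella", "wicked"]          # toy regionally marked words
regions = {
    "south":  region_vector(["yall come", "yall yall"], vocab),
    "west":   region_vector(["hella cool", "hella"], vocab),
    "south2": region_vector(["yall ready"], vocab),
}
# In agglomerative clustering, the closest pair of regions merges first.
pairs = sorted(
    (distance(regions[a], regions[b]), a, b)
    for a in regions for b in regions if a < b
)
```

The two "south" regions have identical usage profiles and so merge first; repeating the merge step bottom-up yields the dendrogram from which cultural regions are read off.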
Reinforcement learning (RL) offers the potential for training agents that can interact autonomously in the real world. A key limitation, however, is the brittleness of RL algorithms with respect to core hyperparameters and network architecture choices. Furthermore, non-stationarities such as evolving training data and increased agent complexity mean that different hyperparameters and architectures may be optimal at different points of training. This motivates AutoRL, a class of methods seeking to automate these design choices. One prominent class of AutoRL methods is population-based training (PBT), which has led to impressive performance in several large-scale settings. In this paper, we introduce two new innovations in PBT-style methods. First, we employ trust-region-based Bayesian optimization, enabling full coverage of the high-dimensional mixed hyperparameter search space. Second, we show that, using generational learning, we can also jointly learn architectures and hyperparameters within a single training run. Leveraging the new, highly parallelizable Brax physics engine, we show that these innovations lead to large performance gains, significantly outperforming tuned baselines while learning entire configurations on the fly. Code is available at https://github.com/xingchenwan/bgpbt.
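The vanilla PBT mechanism that the paper's innovations build upon is a periodic exploit/explore step: poorly scoring population members copy hyperparameters from well scoring ones and then perturb them. The sketch below is generic PBT with random perturbation, not the paper's trust-region Bayesian optimization variant, and the member dictionaries are invented for illustration.

```python
import random

def pbt_step(population, mutate=0.2):
    """One exploit/explore step of population-based training.

    The bottom half of the population copies the learning rate of a
    top-half member (exploit), then perturbs it (explore).
    """
    ranked = sorted(population, key=lambda m: m["score"], reverse=True)
    half = len(ranked) // 2
    for loser, winner in zip(ranked[half:], ranked[:half]):
        loser["lr"] = winner["lr"] * random.choice([1 - mutate, 1 + mutate])
    return ranked

random.seed(0)
population = [{"lr": 10 ** random.uniform(-4, -2), "score": random.random()}
              for _ in range(4)]
ranked = pbt_step(population)
```

Replacing the random `* (1 ± mutate)` perturbation with a model-based proposal (here, trust-region Bayesian optimization over the mixed search space) is precisely where the paper's first innovation slots in.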
As machine learning (ML) systems become increasingly widespread, it is necessary to audit these systems for bias prior to deployment. Recent research has developed algorithms to efficiently identify intersectional bias in the form of interpretable, underperforming subsets (or slices) of the data. However, these solutions and their insights are limited without tools for visually understanding and interacting with the results of these algorithms. We propose Visual Auditor, an interactive visualization tool for auditing and summarizing model biases. Visual Auditor assists model validation by providing an interpretable overview of intersectional bias (underperformance present in populations defined by multiple features), details about the relationships between problematic data slices, and comparisons between underperforming and well-performing slices in a model. Our open-source tool runs directly in computational notebooks and web browsers, making model auditing accessible and easy to integrate into current ML development workflows. An observational user study conducted in collaboration with domain experts at Fiddler AI highlights that our tool can help ML practitioners identify and understand model biases.
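The underlying audit computation, measuring model accuracy on every data slice defined by combinations of feature values, can be sketched exhaustively for small feature sets. This is an illustrative brute-force version (the slice-finding algorithms the tool visualizes are more efficient), and the feature names and rows are invented.

```python
from itertools import combinations
from collections import defaultdict

def audit_slices(rows, max_features=2):
    """Accuracy for every slice defined by up to `max_features` feature values.

    Each row carries its feature values and whether the model's
    prediction on it was correct (1) or not (0).
    """
    stats = defaultdict(lambda: [0, 0])  # slice -> [n_correct, n_total]
    for row in rows:
        feats, correct = row["features"], row["correct"]
        for r in range(1, max_features + 1):
            for combo in combinations(sorted(feats.items()), r):
                stats[combo][0] += correct
                stats[combo][1] += 1
    return {s: c / n for s, (c, n) in stats.items()}

rows = [
    {"features": {"sex": "f", "age": "old"},   "correct": 0},
    {"features": {"sex": "f", "age": "young"}, "correct": 1},
    {"features": {"sex": "m", "age": "old"},   "correct": 1},
]
acc = audit_slices(rows)
worst = min(acc, key=acc.get)  # the intersectional slice with lowest accuracy
```

Here the worst slice is the intersection `age=old AND sex=f`, even though each feature value alone looks only mildly problematic; surfacing exactly such intersections is what the visualization summarizes.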
Offline reinforcement learning has shown great promise in leveraging large pre-collected datasets for policy learning, allowing agents to forgo often-expensive online data collection. However, to date, offline reinforcement learning from visual observations has been relatively under-explored, and there is a lack of understanding of where the remaining challenges lie. In this paper, we seek to establish simple baselines for continuous control in the visual domain. We show that simple modifications to two state-of-the-art online reinforcement learning algorithms, DreamerV2 and DrQ-v2, suffice to outperform prior work and establish competitive baselines. We rigorously evaluate these algorithms on both existing offline datasets and a new testbed for offline reinforcement learning from visual observations that better represents the data distributions present in real-world offline reinforcement learning problems, and we open-source our code and data to facilitate progress in this important domain. Finally, we present and analyze several key desiderata unique to offline RL from visual observations, including visual distractions and visually identifiable changes in dynamics.